
    C-HiLasso: A Collaborative Hierarchical Sparse Modeling Framework

    Sparse modeling is a powerful framework for data analysis and processing. Traditionally, encoding in this framework is performed by solving an L1-regularized linear regression problem, commonly referred to as Lasso or Basis Pursuit. In this work we combine the sparsity-inducing property of the Lasso model at the individual feature level with the block-sparsity property of the Group Lasso model, where sparse groups of features are jointly encoded, obtaining a hierarchically structured sparsity pattern. This results in the Hierarchical Lasso (HiLasso), which shows important practical modeling advantages. We then extend this approach to the collaborative case, where a set of simultaneously coded signals share the same sparsity pattern at the higher (group) level, but not necessarily at the lower (inside the group) level, obtaining the collaborative HiLasso model (C-HiLasso). Such signals then share the same active groups, or classes, but not necessarily the same active set. This model is very well suited for applications such as source identification and separation. An efficient optimization procedure, which guarantees convergence to the global optimum, is developed for these new models. The presentation of the new framework and optimization approach is complemented with experimental examples and theoretical results regarding recovery guarantees for the proposed models.
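
    The paper develops its own optimization procedure with global-convergence guarantees; the NumPy sketch below is only a hedged illustration of the hierarchical penalty it describes. For the non-overlapping sparse-group penalty lam1*||a||_1 + lam2*sum_g ||a_g||_2, the proximal operator is known to decompose into elementwise soft-thresholding followed by group-wise shrinkage, which is what an ISTA-style solver would apply at each step. Function and parameter names (prox_hilasso, lam1, lam2) are illustrative, not taken from the paper.

        import numpy as np

        def prox_hilasso(a, groups, lam1, lam2):
            # Proximal operator of lam1*||a||_1 + lam2*sum_g ||a_g||_2
            # for non-overlapping groups that partition the coefficients.
            # Lasso part: elementwise soft-thresholding.
            z = np.sign(a) * np.maximum(np.abs(a) - lam1, 0.0)
            # Group Lasso part: block-wise shrinkage; whole groups may vanish.
            out = np.zeros_like(z)
            for g in groups:  # g: index array of one group
                gnorm = np.linalg.norm(z[g])
                if gnorm > lam2:  # group survives, shrunk toward zero
                    out[g] = (1.0 - lam2 / gnorm) * z[g]
            return out

    A proximal-gradient solver would alternate a gradient step on the data-fidelity term with this operator, e.g. a = prox_hilasso(a - step * D.T @ (D @ a - x), groups, step * lam1, step * lam2).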

    Collaborative Hierarchical Sparse Modeling

    Sparse modeling is a powerful framework for data analysis and processing. Traditionally, encoding in this framework is done by solving an L1-regularized linear regression problem, usually called Lasso. In this work we first combine the sparsity-inducing property of the Lasso model, at the individual feature level, with the block-sparsity property of the group Lasso model, where sparse groups of features are jointly encoded, obtaining a hierarchically structured sparsity pattern. This results in the hierarchical Lasso, which shows important practical modeling advantages. We then extend this approach to the collaborative case, where a set of simultaneously coded signals share the same sparsity pattern at the higher (group) level but not necessarily at the lower one. Signals then share the same active groups, or classes, but not necessarily the same active set. This is very well suited for applications such as source separation. An efficient optimization procedure, which guarantees convergence to the global optimum, is developed for these new models. The presentation of the new framework and optimization approach is complemented with experimental examples and preliminary theoretical results.
    Comment: To appear in CISS 201
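
    To make the collaborative coupling concrete, here is a hedged NumPy sketch of a C-HiLasso-style penalty evaluated on a coefficient matrix A with one column per signal (the row grouping, variable names, and exact weighting are illustrative assumptions): the group term couples all signals, while the L1 term leaves within-group sparsity independent per signal.

        import numpy as np

        def chilasso_penalty(A, groups, lam1, lam2):
            # Group term couples the signals: the Frobenius norm of each
            # group's sub-matrix is small only if the group is inactive
            # for ALL signals (columns) simultaneously.
            group_term = sum(np.linalg.norm(A[g, :]) for g in groups)
            # L1 term acts entrywise, so the active set inside a shared
            # group can still differ from one signal to the next.
            l1_term = np.abs(A).sum()
            return lam2 * group_term + lam1 * l1_term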

    Accelerating Eulerian Fluid Simulation With Convolutional Networks

    Efficient simulation of the Navier-Stokes equations for fluid flow is a long-standing problem in applied mathematics, for which state-of-the-art methods require large compute resources. In this work, we propose a data-driven approach that leverages the approximation power of deep learning with the precision of standard solvers to obtain fast and highly realistic simulations. Our method solves the incompressible Euler equations using the standard operator splitting method, in which a large sparse linear system with many free parameters must be solved. We use a Convolutional Network with a highly tailored architecture, trained using a novel unsupervised learning framework, to solve the linear system. We present real-time 2D and 3D simulations that outperform recently proposed data-driven methods; the obtained results are realistic and show good generalization properties.
    Comment: Significant revision
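
    As a hedged illustration of the operator-splitting step described above (not the paper's code or network), the NumPy sketch below performs the pressure projection on a periodic 2D grid, using plain Jacobi iteration in place of the large sparse Poisson solve that the paper's trained ConvNet replaces; the grid layout and boundary handling are simplifying assumptions.

        import numpy as np

        def pressure_projection(u, v, dx=1.0, iters=60):
            # Divergence of the velocity field (central differences,
            # periodic boundaries via np.roll).
            div = ((np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1))
                   + (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0))) / (2.0 * dx)
            # Solve the Poisson equation laplace(p) = div by Jacobi
            # iteration. This is the large sparse linear system that the
            # paper replaces with a trained Convolutional Network.
            p = np.zeros_like(div)
            for _ in range(iters):
                p = (np.roll(p, 1, axis=0) + np.roll(p, -1, axis=0)
                     + np.roll(p, 1, axis=1) + np.roll(p, -1, axis=1)
                     - dx * dx * div) / 4.0
            # Subtract the pressure gradient to make the field
            # (approximately) divergence-free.
            u = u - (np.roll(p, -1, axis=1) - np.roll(p, 1, axis=1)) / (2.0 * dx)
            v = v - (np.roll(p, -1, axis=0) - np.roll(p, 1, axis=0)) / (2.0 * dx)
            return u, v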

    Classification and averaging of electron tomography volumes

    Electron tomography makes it possible to determine three-dimensional structures of biological material at resolution levels high enough to allow the identification of individual macromolecules such as proteins. This has recently attracted enormous interest in the scientific community. Processing these three-dimensional images is a challenging problem due to their very low signal-to-noise ratios. Individual volumes are thus of little value on their own: they are simply too noisy to allow proper visualization, let alone structural interpretation. It is therefore essential to use averaging techniques that combine a large number of volumes to dramatically increase the signal level in the image. Such techniques have been common practice in recent decades in the area of electron microscopy known as single-particle analysis, where remarkable results have been achieved. Electron tomography, however, differs substantially from single-particle techniques, above all because tomographic images are three-dimensional. This introduces a series of new problems whose correct handling is essential to reach a satisfactory solution. Among these differences, the most notable are dealing with the effect known as the missing wedge, characteristic of electron tomography images, and the need for efficient algorithms for volume registration and classification. This master's thesis presents a study of the problem described, analyzing the solutions proposed so far in the community and then proposing an original solution. The result is a powerful tool that meets all the established requirements. Particular attention is paid to comparing it with existing solutions and to carrying out experiments, with synthetic and real data, that allow its validation. Through these experiments, the true scope of the developed tool is clearly identified, along with the conditions under which it is able to distinguish different conformations of biological material.
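
    The thesis develops its own registration, classification, and averaging pipeline; purely as a hedged sketch of the missing-wedge effect mentioned above, the NumPy toy below computes a wedge-constrained correlation score of the kind commonly used when comparing subtomograms, restricting the comparison to Fourier coefficients actually measured under a single-tilt-axis geometry. The wedge model, the ±60° tilt range, and all names are simplifying assumptions.

        import numpy as np

        def wedge_mask(n, tilt_range_deg=60.0):
            # Simplified single-tilt-axis geometry (tilt about y, beam
            # along z): coefficients with |fz| > tan(alpha) * |fx| were
            # never measured and fall inside the missing wedge.
            f = np.fft.fftfreq(n)
            fz, _, fx = np.meshgrid(f, f, f, indexing="ij")
            alpha = np.radians(tilt_range_deg)
            return np.abs(fz) <= np.tan(alpha) * np.abs(fx)

        def constrained_correlation(vol_a, vol_b, tilt_range_deg=60.0):
            # Similarity of two pre-aligned cubic subtomograms, scored
            # using only Fourier coefficients outside the missing wedge.
            mask = wedge_mask(vol_a.shape[0], tilt_range_deg)
            Fa = np.fft.fftn(vol_a) * mask
            Fb = np.fft.fftn(vol_b) * mask
            num = float(np.sum(Fa * np.conj(Fb)).real)
            den = float(np.sqrt(np.sum(np.abs(Fa) ** 2)
                                * np.sum(np.abs(Fb) ** 2)))
            return num / (den + 1e-12)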

    Fast deep reinforcement learning using online adjustments from the past

    We propose Ephemeral Value Adjustments (EVA): a means of allowing deep reinforcement learning agents to rapidly adapt to experience in their replay buffer. EVA shifts the value predicted by a neural network with an estimate of the value function found by planning over experience tuples from the replay buffer near the current state. EVA combines several recent ideas on integrating episodic memory-like structures into reinforcement learning agents: slot-based storage, content-based retrieval, and memory-based planning. We show that EVA is performant on a demonstration task and on Atari games.
    Comment: Accepted at NIPS 201
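
    As a hedged, self-contained sketch of the value-shifting idea (not the authors' implementation), the NumPy snippet below blends a network's Q-values with a non-parametric estimate retrieved from nearby replay entries. EVA itself plans over the retrieved experience tuples, whereas this toy version only averages stored returns; all names and shapes are assumptions.

        import numpy as np

        def nonparametric_value(state_emb, replay_embs, replay_returns, k=5):
            # Toy non-parametric estimate: average the stored returns of
            # the k replay entries nearest to the current state embedding.
            # EVA performs trajectory-centric planning over the retrieved
            # tuples; plain k-NN averaging keeps the sketch self-contained.
            d = np.linalg.norm(replay_embs - state_emb, axis=1)
            nearest = np.argsort(d)[:k]
            return replay_returns[nearest].mean(axis=0)

        def eva_style_q(q_net, q_np, lam=0.5):
            # Act greedily w.r.t. a convex combination of the network's
            # Q-values and the replay-derived estimate; lam trades off
            # the parametric and non-parametric components.
            return lam * q_net + (1.0 - lam) * q_np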